Adaptive Regret Minimization in Bounded-Memory Games
Online learning algorithms that minimize regret provide strong guarantees in
situations that involve repeatedly making decisions in an uncertain
environment, e.g., a driver deciding which route to take to work every day.
While regret minimization has been extensively studied in repeated games, we
study regret minimization for a richer class of games called bounded memory
games. In each round of a two-player bounded memory game with memory m, both players
simultaneously play an action, observe an outcome and receive a reward. The
reward may depend on the last m outcomes as well as the actions of the players
in the current round. The standard notion of regret for repeated games is no
longer suitable because actions and rewards can depend on the history of play.
To account for this generality, we introduce the notion of k-adaptive regret,
which compares the reward obtained by playing actions prescribed by the
algorithm against a hypothetical k-adaptive adversary with the reward obtained
by the best expert in hindsight against the same adversary. Roughly, a
hypothetical k-adaptive adversary adapts her strategy to the defender's actions
exactly as the real adversary would within each window of k rounds. Our
definition is parametrized by a set of experts, which can include both fixed
and adaptive defender strategies.
We investigate the inherent complexity of and design algorithms for adaptive
regret minimization in bounded memory games of perfect and imperfect
information. We prove a hardness result showing that, with imperfect
information, any k-adaptive regret minimizing algorithm (with fixed strategies
as experts) must be inefficient unless NP=RP even when playing against an
oblivious adversary. In contrast, for bounded memory games of perfect and
imperfect information we present approximate 0-adaptive regret minimization
algorithms against an oblivious adversary running in time n^{O(1)}. Comment: Full Version. GameSec 2013 (Invited Paper).
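As a simplified, hypothetical illustration of the baseline this builds on (memory m = 0, fixed strategies as experts, an oblivious adversary), a standard multiplicative-weights (Hedge) learner achieves low regret against the best expert in hindsight; the function names and parameters below are illustrative, not taken from the paper:

```python
import math
import random

def hedge(reward_fn, n_experts, T, eta=0.1):
    """Multiplicative-weights (Hedge) learner over a fixed set of experts.

    A simplified, hypothetical baseline for the memory-0 repeated-game
    setting with an oblivious adversary; rewards are assumed in [0, 1].
    """
    weights = [1.0] * n_experts
    total_reward = 0.0
    expert_totals = [0.0] * n_experts
    for t in range(T):
        s = sum(weights)
        probs = [w / s for w in weights]
        chosen = random.choices(range(n_experts), weights=probs)[0]
        rewards = [reward_fn(t, e) for e in range(n_experts)]
        total_reward += rewards[chosen]
        for e in range(n_experts):
            expert_totals[e] += rewards[e]
            # exponential-weight update favors experts that did well
            weights[e] *= math.exp(eta * rewards[e])
    # regret vs. the best fixed expert in hindsight (the 0-adaptive case)
    regret = max(expert_totals) - total_reward
    return total_reward, regret
```

Against a k-adaptive adversary the comparison is subtler, since the benchmark expert must be evaluated against the same adversary within each window of k rounds; this sketch does not model that.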
A New Approximate Min-Max Theorem with Applications in Cryptography
We propose a novel proof technique that can be applied to attack a broad
class of problems in computational complexity, when switching the order of
universal and existential quantifiers is helpful. Our approach combines the
standard min-max theorem and convex approximation techniques, offering
quantitative improvements over the standard way of using min-max theorems as
well as more concise and elegant proofs.
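For reference, the standard (von Neumann) min-max theorem for a finite zero-sum game with payoff matrix A over mixed strategies reads:

```latex
\[
\min_{x \in \Delta_m} \; \max_{y \in \Delta_n} \; x^{\top} A \, y
  \;=\;
\max_{y \in \Delta_n} \; \min_{x \in \Delta_m} \; x^{\top} A \, y ,
\]
```

where \(\Delta_m, \Delta_n\) denote probability simplices; per the abstract, the technique combines this quantifier swap with convex-approximation arguments to obtain quantitative control.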
Optimal Depth, Very Small Size Circuits for Symmetrical Functions in AC0
It is well known which symmetric Boolean functions can be computed by constant depth, polynomial size, unbounded fan-in circuits, i.e., which are contained in the complexity class AC0. This result is sharpened. Symmetric Boolean functions in AC0 can be computed by unbounded fan-in circuits with the following properties. If the optimal depth of AC0-circuits is d, the depth is at most d + 2, the number of wires is almost linear, namely n log^{O(1)} n, and the number of gates is subpolynomial (but superpolylogarithmic), namely 2^{O(log^δ n)} for some δ < 1.
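By definition, a symmetric Boolean function depends only on the Hamming weight of its input; a small sketch (with illustrative helper names) makes this concrete:

```python
from itertools import product

def is_symmetric(f, n):
    """Check whether a Boolean function on n bits is symmetric,
    i.e. whether it depends only on the Hamming weight of its input."""
    value_by_weight = {}
    for bits in product([0, 1], repeat=n):
        w = sum(bits)       # Hamming weight of the input
        v = f(bits)
        if value_by_weight.setdefault(w, v) != v:
            return False    # two inputs of equal weight disagree
    return True

# Majority is symmetric: its value is determined by the weight alone.
maj3 = lambda bits: int(sum(bits) >= 2)
```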
Modulus Computational Entropy
The so-called {\em leakage-chain rule} is a very important tool used in many
security proofs. It gives an upper bound on the entropy loss of a random
variable in the case where the adversary, having already learned some random
variables correlated with it, obtains some further information about it.
Analogously to the information-theoretic case, one might expect that also for
the \emph{computational} variants of entropy the loss depends only on the
actual leakage. Surprisingly, Krenn et al.\ have recently shown that for the
most commonly used definitions of computational entropy this holds only if the
computational quality of the entropy deteriorates exponentially in the amount
of information already leaked. This means that the current standard definitions
of computational entropy do not fully capture leakage that occurred
"in the past", which severely limits the applicability of this notion.
As a remedy for this problem we propose a slightly stronger definition of the
computational entropy, which we call the \emph{modulus computational entropy},
and use it as a technical tool that allows us to prove a desired chain rule
that depends only on the actual leakage and not on its history. Moreover, we
show that the modulus computational entropy unifies other, sometimes seemingly
unrelated, notions already studied in the literature in the context of
information leakage and chain rules. Our results indicate that the modulus
entropy is, up to now, the weakest restriction that guarantees that the chain
rule for the computational entropy works. As an example of application we
demonstrate a few interesting cases where our restricted definition is
fulfilled and the chain rule holds. Comment: Accepted at ICTS 201
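For comparison, the information-theoretic analogue that motivates this expectation is the chain rule for average min-entropy (Dodis et al.), where the loss depends only on the length of the new leakage:

```latex
\[
\widetilde{H}_{\infty}\!\left(X \mid Z, B\right)
  \;\ge\;
\widetilde{H}_{\infty}\!\left(X \mid Z\right) - \lambda
\qquad \text{whenever } B \text{ takes at most } 2^{\lambda} \text{ values.}
\]
```

Here Z models the previously learned correlated variables and B the fresh leakage; the abstract's point is that the computational variants fail to satisfy such a bound without the stronger modulus definition.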
Effects of Feeding a Finishing Diet Blended with Different Phases of Nursery Diets on Growth Performance and Economics of Nursery Pigs
A total of 1,260 weaned pigs (PIC TR4 × (Fast LW × PIC L02); initially 12.9 lb BW) were housed in a commercial research barn and used in a 47-d study to determine the effects of blending a finishing diet into different phases of nursery diets on pig growth performance. Pens of pigs were blocked by initial BW and gender and allotted to 1 of 4 treatment groups (15 pens/treatment). In a 5-phase feeding program, the 4 treatments were: 1) standard nursery diets throughout (control); or standard nursery diets with 5.5 lb/pig of late finishing feed blended at the beginning of 2) Phase 2; 3) Phase 3; or 4) Phase 4. Phase changes were based on feed budgets. From d 0 to 7, all pigs received the same standard Phase 1 diet and had similar growth performance. Compared with control pigs, blending finishing feed into the Phase 2 period resulted in poorer (P < 0.01) ADG, ADFI, and F/G from d 7 to 14, poorer (P = 0.025) F/G from d 21 to 28, decreased (P = 0.028) ADG from d 28 to 35, and decreased (P < 0.05) ADFI and F/G from d 35 to 47. Blending finishing feed during Phase 3 resulted in worsened (P < 0.001) ADG and F/G from d 14 to 21, decreased (P = 0.010) ADG from d 21 to 28, and lower (P < 0.05) ADFI and F/G from d 35 to 47 compared with control pigs. Pigs that received the blended diet in Phase 4 had impaired (P < 0.001) ADG and F/G from d 21 to 28, but had improved (P = 0.010) F/G from d 35 to 47. Overall (d 0 to 47), blending the finishing diet into Phase 2 decreased (P < 0.05) ADG, ADFI, and final BW, but did not affect F/G compared with control pigs or pigs that had finishing feed blended into Phase 4. Blending finishing feed into Phase 3 or 4 did not influence overall growth performance. Pigs that had finishing feed blended into Phase 2 or 3 had lower (P < 0.05) overall feed costs than pigs from the control and Phase 4 blending treatments.
Gain value was decreased (P < 0.05) when finishing feed was blended into Phase 2 compared with the control or when feed was blended into Phase 4. However, no differences in feed cost per lb of gain and only numerical differences in income over feed cost were observed among the treatments. In conclusion, feeding finishing feed in the early nursery phases negatively affected pig growth performance; however, blending approximately 5.5 lb/pig of finishing feed into nursery diets for pigs greater than 22 lb BW did not affect overall growth performance.
Effects of Feeding Increasing Amounts of Finishing Diet Blended with Nursery Diets on Growth Performance and Economics of Nursery Pigs
A total of 1,260 pigs [PIC TR4 × (Fast LW × PIC L02); initial body weight (BW) 23.3 lb] were housed in two commercial research rooms and used in a 28-d study to determine the effects of blending increasing amounts of finishing feed into phase 3 nursery diets on pig growth performance. At weaning, pigs were placed into pens with 21 pigs per pen and 30 pens per room. Pigs were fed commercial nursery diets in a 5-phase feeding program with phases 1 and 2 fed before the start of the experiment. At the beginning of phase 3 (day 0), pens of pigs were blocked by pen weight and room. Within blocks, pens were allotted randomly to 1 of 4 treatments with 15 replications per treatment. Treatments consisted of a dose titration blending increasing amounts of late finishing feed (0, 2.75, 5.5, and 8.25 lb per pig, corresponding to 0, 3, 6, and 9 tons per 2,200-head barn, respectively) into a phase 3 nursery diet. Diet changes for the remaining phases were based on feed budgets. From day 0 to 14, average daily gain (ADG) was unaffected as the finishing feed budget increased from 0 to 2.75 lb/pig but decreased thereafter (quadratic, P = 0.090). Average daily feed intake (ADFI) was unaffected, but feed-to-gain ratio (F/G) worsened (linear, P < 0.001) as more finishing feed was blended into the phase 3 nursery diet. From day 14 to 28, pigs previously fed increasing levels of late finishing feed had improved (linear, P < 0.05) ADG and F/G, but ADFI was unaffected. Overall (day 0 to 28), blending increasing amounts of finishing feed with the phase 3 nursery diet decreased ADG (linear, P = 0.050) and tended to decrease (linear, P < 0.07) ADFI and final BW. However, there was no evidence of any linear or quadratic effects of increasing finishing feed budgets on overall F/G. Feed cost, gain value, and feed cost per lb of gain decreased (linear, P < 0.05) as the finishing feed budget increased from 0 to 8.25 lb/pig. However, income over feed cost was not different among treatments.
In conclusion, feeding increasing amounts of late finishing feed to phase 3 (28 lb) nursery pigs decreased overall ADG and ADFI, but did not affect income over feed cost.
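The feed-budget conversion stated in the abstract can be sanity-checked in a few lines (assuming 2,000-lb short tons; the function name is illustrative):

```python
# Convert a per-pig feed budget (lb) to total feed for a 2,200-head barn
# in short tons (2,000 lb per ton), as in the treatment description.
def tons_per_barn(lb_per_pig, head=2200, lb_per_ton=2000):
    return lb_per_pig * head / lb_per_ton

budgets = [0, 2.75, 5.5, 8.25]          # lb per pig
print([round(tons_per_barn(b)) for b in budgets])  # -> [0, 3, 6, 9]
```

The exact values (0, 3.025, 6.05, and 9.075 tons) round to the 0, 3, 6, and 9 tons quoted in the abstract.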
Tensor completion in hierarchical tensor representations
Compressed sensing extends from the recovery of sparse vectors from
undersampled measurements via efficient algorithms to the recovery of matrices
of low rank from incomplete information. Here we consider a further extension
to the reconstruction of tensors of low multi-linear rank in recently
introduced hierarchical tensor formats from a small number of measurements.
Hierarchical tensors are a flexible generalization of the well-known Tucker
representation, with the advantage that the number of degrees of freedom of a
low-rank tensor does not scale exponentially with the order of the tensor.
While corresponding tensor decompositions can be computed efficiently via
successive applications of (matrix) singular value decompositions, some
important properties of the singular value decomposition do not extend from the
matrix to the tensor case. This results in major computational and theoretical
difficulties in designing and analyzing algorithms for low rank tensor
recovery. For instance, a canonical analogue of the tensor nuclear norm is
NP-hard to compute in general, which is in stark contrast to the matrix case.
In this book chapter we consider versions of iterative hard thresholding
schemes adapted to hierarchical tensor formats. A variant builds on methods
from Riemannian optimization and uses a retraction mapping from the tangent
space of the manifold of low rank tensors back to this manifold. We provide
first partial convergence results based on a tensor version of the restricted
isometry property (TRIP) of the measurement map. Moreover, an estimate of the
number of measurements is provided that ensures the TRIP of a given tensor rank
with high probability for Gaussian measurement maps. Comment: revised version, to be published in Compressed Sensing and Its Applications (edited by H. Boche, R. Calderbank, G. Kutyniok, J. Vybiral).
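A minimal sketch of the iterative hard thresholding idea, shown for the simpler matrix case, where a truncated SVD plays the role that the hierarchical SVD plays for tensors; the operator names and parameters are illustrative, not the chapter's algorithm:

```python
import numpy as np

def iht_low_rank(A_op, A_adj, y, shape, rank, steps=200, step_size=1.0):
    """Iterative hard thresholding for low-rank *matrix* recovery.

    A_op maps a matrix to measurements, A_adj is its adjoint. Each step
    takes a gradient step on ||A(X) - y||^2 and then projects back onto
    the set of rank-r matrices via a truncated SVD.
    """
    X = np.zeros(shape)
    for _ in range(steps):
        grad = A_adj(A_op(X) - y)          # gradient of the data-fit term
        X = X - step_size * grad
        # hard-thresholding step: best rank-r approximation via SVD
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return X
```

For hierarchical tensor formats the projection is replaced by a rank truncation in the hierarchical format (e.g. via successive matrix SVDs), or, in the Riemannian variant mentioned above, by a retraction onto the low-rank manifold.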
How much randomness can be extracted from memoryless Shannon entropy sources?
We revisit the classical problem: given a memoryless source with a certain amount of Shannon entropy, how many random bits can be extracted? This question appears in works studying random number generators built from physical entropy sources.
Some authors use a heuristic estimate obtained from the Asymptotic Equipartition Property, which roughly equates the number of extractable bits with the total amount of Shannon entropy. However, the best known precise form guarantees strictly fewer bits, with a loss term that depends on the distance of the extracted bits from uniform. In this paper we show a matching upper bound; therefore, this loss of bits is necessary. As we show, this theoretical bound is of practical relevance. Namely, applying the imprecise AEP heuristic to a mobile phone accelerometer, one might substantially overestimate the extractable entropy, no matter what the extractor is. Thus, the "AEP extracting heuristic" should not be used without taking the precise error into account.
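A toy sketch of the AEP-style heuristic the abstract warns about: compute the per-sample Shannon entropy of a memoryless source and the naive extractable-bit estimate (the paper's point is that a loss term must still be subtracted from this figure; the distribution below is illustrative):

```python
import math

def shannon_entropy(p):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

p = [0.5, 0.25, 0.25]      # a toy memoryless source
H = shannon_entropy(p)     # 1.5 bits per sample
n = 1000                   # number of i.i.d. samples
aep_estimate = n * H       # naive heuristic: ~1500 extractable bits
```

Per the abstract, the actual number of extractable bits is smaller than this product by a loss term that grows with the required closeness to uniform.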
Effect of Sow Lactation Crate Size on Litter Performance and Survivability
A total of 529 litters of pigs (PIC TR4 × (Fast LW × PIC L02)) were used to examine the effect of sow lactation crate size on nursing pig litter performance and survivability. The sow portion of the farrowing crate was maintained at a constant length and width of 7.4 and 2.0 ft, respectively. To form the treatments, crate width was adjusted accordingly, taking space away from one sow's crate to give it to another, allowing for 3 crate widths: 4.8 (small), 5.4 (medium), and 6.0 ft (large). This allowed for blocks of 3 crates, where each treatment was represented. Sows were loaded into individual lactation crates at random, balancing for parity across treatments. Cross fostering occurred within 24 h of farrowing, prior to obtaining d 1 litter weight, in an effort to equalize litter size across treatments. Data were analyzed using generalized mixed models where treatment was a fixed effect and block was a random effect. Born alive, piglets weaned, and pre-weaning mortality were all fitted using a binomial distribution. Regardless of treatment, there was no evidence of differences in total piglets born (14.3), percentage of piglets born alive (92.3%), d 1 litter weight after cross fostering (40.0 lb), litter weaning weight (145.9 lb), litter ADG (5.4 lb/d), or number of piglets weaned (10.7). In addition, no evidence of differences was observed in the percentage of piglets weaned (80.9%) or pre-weaning mortality (19.1%). In conclusion, increasing lactation crate size did not impact litter performance or pig survivability in this study.